
    A Shapley value approach to pricing climate risks

    This paper prices the risk of climate change by calculating a lower bound for the price of a virtual insurance policy against climate risks associated with the business as usual (BAU) emissions path. In analogy with ordinary insurance pricing, this price depends on the current risk to which society is exposed on the BAU emissions path and on a second emissions path reflecting risks that society is willing to take. The difference in expected damages on these two paths is the price which a risk-neutral insurer would charge for the risk swap, excluding transaction costs and profits, and it is also a lower bound on society's willingness to pay for this swap. The price is computed by (1) identifying a probabilistic risk constraint that society accepts, (2) computing an optimal emissions path satisfying that constraint using an abatement cost function, (3) computing the extra expected damages from the business as usual path, above those of the risk-constrained path, and (4) apportioning those excess damages, per ton, over the emissions in the various time periods. The calculations follow the 2010 US government social cost of carbon analysis, and are done with DICE2009.
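    The four-step procedure lends itself to a small numerical sketch. Everything below is a toy illustration: the damage function, emissions paths, and damage factor are invented placeholders, not the paper's DICE2009 calibration or the 2010 US social cost of carbon inputs.

```python
# Toy sketch of the four-step pricing procedure described in the abstract.
# All functional forms and numbers are illustrative placeholders.

def expected_damages(emissions_path, damage_factor=0.01):
    """Stand-in expected-damage function: per-period damages grow with
    cumulative emissions (an assumption, not the paper's damage model)."""
    cumulative = 0.0
    total = 0.0
    for e in emissions_path:
        cumulative += e
        total += damage_factor * cumulative
    return total

# Steps 1-2 are assumed already done: `constrained_path` is an optimal path
# that satisfies the accepted probabilistic risk constraint.
bau_path = [10.0, 11.0, 12.0, 13.0]       # business-as-usual emissions
constrained_path = [10.0, 9.0, 8.0, 7.0]  # risk-constrained emissions

# Step 3: extra expected damages on the BAU path.
excess_damages = expected_damages(bau_path) - expected_damages(constrained_path)

# Step 4: apportion the excess damages over the extra tons emitted, giving
# a lower bound on the per-ton price of the virtual insurance swap.
excess_emissions = sum(b - c for b, c in zip(bau_path, constrained_path))
price_per_ton = excess_damages / excess_emissions
```

    A risk-neutral insurer's break-even charge for the swap would then be at least `price_per_ton` per ton of excess emissions.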

    Climate Change Uncertainty Quantification: Lessons Learned from the Joint EU-USNRC Project on Uncertainty Analysis of Probabilistic Accident Consequence Codes

    Between 1990 and 2000 the U.S. Nuclear Regulatory Commission and the Commission of the European Communities conducted a joint uncertainty analysis of accident consequences for nuclear power plants. This study remains a benchmark for uncertainty analysis of large models involving high risks with high public visibility, and where substantial uncertainty exists. The study set standards with regard to structured expert judgment, performance assessment, dependence elicitation and modeling, and uncertainty propagation of high-dimensional distributions with complex dependence. The integrated assessment models for the economic effects of climate change also involve high risks and large uncertainties, and interest in conducting a proper uncertainty analysis is growing. This article reviews the EU-USNRC effort and extracts lessons learned, with a view toward informing a comparable effort for the economic effects of climate change.
    Keywords: uncertainty analysis, expert judgment, expert elicitation, probabilistic inversion, dependence modeling, nuclear safety

    The Unholy Trinity: Fat Tails, Tail Dependence, and Micro-Correlations

    Recent events in the financial and insurance markets, as well as the looming challenges of a globally changing climate, point to the need to re-think the ways in which we measure and manage catastrophic and dependent risks. Management can only be as good as our measurement tools. To that end, this paper outlines detection, measurement, and analysis strategies for fat-tailed risks, tail-dependent risks, and risks characterized by micro-correlations. A simple model of insurance demand and supply is used to illustrate the difficulties in insuring risks characterized by these phenomena. Policy implications are discussed.
    Keywords: risk, fat tails, tail dependence, micro-correlations, insurance, natural disasters
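    As one concrete example of a detection tool for fat tails, the Hill estimator of the tail index is sketched below. This is a standard technique chosen here for illustration; the paper's own detection and measurement strategies may differ.

```python
import math

def hill_estimator(data, k):
    """Hill estimator of the tail index alpha, computed from the k largest
    observations. Roughly, alpha below 2 signals a fat tail with infinite
    variance; alpha below 1 signals an infinite mean."""
    ordered = sorted(data, reverse=True)  # descending order statistics
    logs = [math.log(x) for x in ordered[: k + 1]]
    # Mean log-excess of the top k observations over the (k+1)-th largest.
    gamma = sum(logs[i] - logs[k] for i in range(k)) / k
    return 1.0 / gamma
```

    Applied to data drawn from a Pareto distribution with tail index 2, the estimate comes out close to 2, while thin-tailed data yield much larger values.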

    Precursor Analysis for Offshore Oil and Gas Drilling: From Prescriptive to Risk-Informed Regulation

    The Oil Spill Commission’s chartered mission—to “develop options to guard against … any oil spills associated with offshore drilling in the future” (National Commission 2010)—presents a major challenge: how to reduce the risk of low-frequency oil spill events, and especially high-consequence events like the Deepwater Horizon accident, when historical experience contains few oil spills of material scale and none approaching the significance of the Deepwater Horizon. In this paper, we consider precursor analysis as an answer to this challenge, addressing first its development and use in nuclear reactor regulation and then its applicability to offshore oil and gas drilling. We find that the nature of offshore drilling risks, the operating information obtainable by the regulator, and the learning curve provided by 30 years of nuclear experience make precursor analysis a promising option available to the U.S. Bureau of Ocean Energy Management, Regulation and Enforcement (BOEMRE) to bring cost-effective, risk-informed oversight to bear on the threat of catastrophic oil spills.
    Keywords: catastrophic oil spills, quantitative risk analysis, risk-informed regulation

    Cross validation for the classical model of structured expert judgment

    We update the 2008 TU Delft structured expert judgment database with data from 33 professionally contracted Classical Model studies conducted between 2006 and March 2015 to evaluate its performance relative to other expert aggregation models. We briefly review alternative mathematical aggregation schemes, including harmonic weighting, before focusing on linear pooling of expert judgments with equal weights and performance-based weights. Performance weighting outperforms equal weighting in-sample in all but 1 of the 33 studies. True out-of-sample validation is rarely possible for Classical Model studies, and cross validation techniques that split calibration questions into a training and test set are used instead. Performance weighting incurs an “out-of-sample penalty” and its statistical accuracy out-of-sample is lower than that of equal weighting. However, as a function of training set size, the statistical accuracy of performance-based combinations reaches 75% of the equal weight value when the training set includes 80% of calibration variables. At this point the training set is sufficiently powerful to resolve differences in individual expert performance. The information of performance-based combinations is double that of equal weighting when the training set is at least 50% of the set of calibration variables. Previous out-of-sample validation work used a Total Out-of-Sample Validity Index based on all splits of the calibration questions into training and test subsets, which is expensive to compute and includes small training sets of dubious value. As an alternative, we propose an Out-of-Sample Validity Index based on averaging the product of statistical accuracy and information over all training sets sized at 80% of the calibration set. Performance weighting outperforms equal weighting on this Out-of-Sample Validity Index in 26 of the 33 post-2006 studies; the probability of 26 or more successes in 33 trials, if there were no difference between performance weighting and equal weighting, is 0.001.
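    The averaging structure of the proposed index can be sketched as follows. The two scoring functions are stubs standing in for the Classical Model's actual machinery, which would fit performance weights on each training set and score the resulting combination on the held-out questions; only the enumeration and averaging over 80% training sets is shown.

```python
from itertools import combinations

def oos_validity_index(calibration_vars, accuracy_fn, information_fn, frac=0.8):
    """Average the product of statistical accuracy and information over all
    training sets containing `frac` of the calibration variables."""
    n_train = round(frac * len(calibration_vars))
    products = []
    for train in combinations(calibration_vars, n_train):
        test = [v for v in calibration_vars if v not in train]
        # In the Classical Model, performance weights would be fit on `train`
        # and the combined judgment scored on `test`; stubs stand in here.
        products.append(accuracy_fn(train, test) * information_fn(train, test))
    return sum(products) / len(products)
```

    With, say, ten calibration variables, this averages over the 45 possible 8-question training sets rather than over every split of every size, which is what makes it cheaper than the Total Out-of-Sample Validity Index.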

    Evaluation of a Performance-Based Expert Elicitation: WHO Global Attribution of Foodborne Diseases

    For many societally important science-based decisions, data are inadequate, unreliable, or non-existent, and expert advice is sought. In such cases, procedures for eliciting structured expert judgments (SEJ) are increasingly used. This raises questions regarding validity and reproducibility. This paper presents new findings from a large-scale international SEJ study intended to estimate the global burden of foodborne disease on behalf of WHO. The study involved 72 experts distributed over 134 expert panels, with panels comprising thirteen experts on average. Elicitations were conducted in five languages. Performance-based weighted solutions for target questions of interest were formed for each panel. These weights were based on each expert's statistical accuracy and informativeness, determined using between ten and fifteen calibration variables from the experts' field with known values. Equal-weight combinations were also calculated. The main conclusions on expert performance are: (1) SEJ does provide a science-based method for attribution of the global burden of foodborne diseases; (2) equal weighting of experts per panel increased statistical accuracy to acceptable levels, but at the cost of informativeness; (3) performance-based weighting increased informativeness, while retaining accuracy; (4) due to study constraints, individual experts' accuracies were generally lower than in other SEJ studies; and (5) there was a negative correlation between experts' informativeness and statistical accuracy which attenuated as accuracy improved, revealing that the least accurate experts drive the negative correlation. It is shown, however, that performance-based weighting has the ability to yield statistically accurate and informative combinations of experts' judgments, thereby offsetting this contrary influence. The present findings suggest that application of SEJ on a large scale is feasible, and motivate the development of enhanced training and tools for remote elicitation of multiple, internationally dispersed panels.

    Ice sheet and climate processes driving the uncertainty in projections of future sea level rise: Findings from a structured expert judgement approach

    The ice sheets covering Antarctica and Greenland present the greatest uncertainty in, and largest potential contribution to, future sea level rise. The uncertainty arises from a paucity of suitable observations covering the full range of ice sheet behaviors, incomplete understanding of the influences of diverse processes, and limitations in defining key boundary conditions for the numerical models. To investigate the impact of these uncertainties on ice sheet projections we undertook a structured expert judgement study. Here, we interrogate the findings of that study to identify the dominant drivers of uncertainty in projections and their relative importance as a function of ice sheet and time. We find that for the 21st century, Greenland surface melting, in particular the role of surface albedo effects, and West Antarctic ice dynamics, specifically the role of ice shelf buttressing, dominate the uncertainty. The importance of these effects holds under both a high-end 5°C global warming scenario and another that limits global warming to 2°C. During the 22nd century the dominant drivers of uncertainty shift. Under the 5°C scenario, East Antarctic ice dynamics dominate the uncertainty in projections, driven by the possible role of ice flow instabilities. These dynamic effects only become dominant, however, for a temperature scenario above the Paris Agreement 2°C target and beyond 2100. Our findings identify key processes and factors that need to be addressed in future modeling and observational studies in order to reduce uncertainties in ice sheet projections.

    Expert elicitation: using the classical model to validate experts' judgments

    The inclusion of expert judgments along with other forms of data in science, engineering, and decision making is inevitable. Expert elicitation refers to formal procedures for obtaining and combining expert judgments. Expert elicitation is required when existing data and models cannot provide needed information. This makes validating expert judgments a challenge because they are used when other data do not exist, and thus measuring their accuracy is difficult. This article examines the Classical Model of structured expert judgment, which is an elicitation method that includes validation of the experts' assessments against empirical data. In the Classical Model, experts assess both the unknown target questions and a set of calibration questions, which are items from the experts’ field that have observed true values. The Classical Model scores experts on their performance in assessing the calibration questions and then produces performance-weighted combinations of the experts. From 2006 through March 2015, the Classical Model was used in thirty-three unique applications. Fewer than one-third of the individual experts in these studies were statistically accurate, highlighting the need for validation. Overall, the performance-based combination of experts produced in the Classical Model is more statistically accurate and more informative than an equal weighting of experts.
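    The statistical-accuracy (calibration) score at the heart of the Classical Model can be sketched as follows. An expert states 5%, 50%, and 95% quantiles for each calibration question; the score is the p-value of the observed interquantile hit frequencies under the hypothesis that the expert's stated probabilities are correct. The closed-form chi-square survival function below is standard, but treat the whole snippet as an illustrative reconstruction rather than the model's reference implementation.

```python
import math

def calibration_score(hit_counts, p=(0.05, 0.45, 0.45, 0.05)):
    """Classical-Model-style statistical accuracy: p-value of the expert's
    interquantile hit frequencies. `hit_counts` gives how many realizations
    fell below the 5% quantile, in (5%, 50%], in (50%, 95%], and above the
    95% quantile across the calibration questions."""
    n = sum(hit_counts)
    s = [c / n for c in hit_counts]
    # Relative information of the empirical distribution s w.r.t. p.
    relinfo = sum(si * math.log(si / pi) for si, pi in zip(s, p) if si > 0)
    x = 2 * n * relinfo  # asymptotically chi-square with 3 degrees of freedom
    # Survival function of the chi-square distribution with 3 df (closed form).
    return 1 - math.erf(math.sqrt(x / 2)) + math.sqrt(2 * x / math.pi) * math.exp(-x / 2)
```

    A perfectly calibrated expert (hits split 5/45/45/5 out of 100) scores 1.0, while a systematically overconfident one (say 20/30/30/20) scores near zero and would receive almost no weight in the performance-based combination.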

    Reply to comment on "Suburban watershed nitrogen retention: Estimating the effectiveness of stormwater management structures" by Koch et al. (Elem Sci Anth 3:000063, July 2015)

    We reply to a comment on our recent structured expert judgment analysis of stormwater nitrogen retention in suburban watersheds. Low relief, permeable soils, a dynamic stream channel, and subsurface flows characterize many lowland Coastal Plain watersheds. These features result in unique catchment hydrology, limit the precision of streamflow measurements, and challenge the assumptions for calculating runoff from rainfall and catchment area. We reiterate that the paucity of high-resolution nitrogen loading data for Chesapeake Bay watersheds warrants greater investment in long-term empirical studies of suburban watershed nutrient budgets for this region.